
Conversation

@okane16 (Collaborator) commented Nov 19, 2025

> [!NOTE]
> Adds cluster-aware ClickHouse support (ON CLUSTER), PRIMARY KEY expressions, Iceberg S3 engine, Python MooseModel with column autocomplete, schema-driven JSON parsing/offset commit in streaming, and extensive docs/CLI/website updates.
>
> - **OLAP/Engines**:
>   - Add cluster support for ON CLUSTER DDL; validate against explicit `keeper_path`/`replica_name`; serialize cluster in infra map.
>   - Support `primary_key_expression` overriding column `Key<T>`; include in serialization.
>   - Add read-only IcebergS3 engine (TS/Py) with validation.
> - **Streaming**:
>   - Replace generic date reviver with schema-driven JSON field mutations (datetime parsing), and commit offsets only after producer flush (at-least-once).
> - **Python SDK**:
>   - Introduce MooseModel for LSP column autocomplete; `Column` now formats/quotes for SQL; add DateTime precision annotations and tests.
> - **TypeScript SDK**:
>   - Add `DateTimeString`/`DateTime64String` types; expose cluster/`primaryKeyExpression` in configs; include in infra export.
> - **CLI/Docs/Website**:
>   - New `moose query` command docs; large docs reorg (migration modes/lifecycle, guides, TOC/filters), domain/sitemap updates, build/runtime tweaks.
> - **Templates/Build**:
>   - Add TS/Py cluster test templates and an experimental Py template; package templates in build; dependency and config updates (tracing logs, ClickHouse client, Biome).
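The at-least-once behavior called out under Streaming (commit offsets only after the producer flush) can be sketched with in-memory stand-ins; `StubProducer`, `StubConsumer`, and `process_batch` are illustrative names for this sketch, not Moose or Kafka APIs:

```python
# Minimal sketch of the at-least-once pattern: offsets are committed only
# after the producer has flushed, so a crash before commit replays the
# batch instead of silently dropping it.
class StubProducer:
    def __init__(self, log):
        self.log = log

    def produce(self, record):
        self.log.append(("produce", record))

    def flush(self):
        self.log.append(("flush", None))


class StubConsumer:
    def __init__(self, log):
        self.log = log

    def commit(self, offset):
        self.log.append(("commit", offset))


def process_batch(consumer, producer, messages):
    # Transform and produce every message first...
    for offset, payload in messages:
        producer.produce(payload.upper())
    # ...then flush the producer, and only after a successful flush
    # commit the last consumed offset.
    producer.flush()
    consumer.commit(messages[-1][0])


log = []
process_batch(StubConsumer(log), StubProducer(log), [(0, "a"), (1, "b")])
```

The event log ends with the flush before the commit, which is the ordering guarantee the summary describes.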

Written by Cursor Bugbot for commit 7c5fd09. This will update automatically on new commits. Configure here.

@vercel vercel bot commented Nov 19, 2025

The latest updates on your projects:

| Project | Deployment | Preview | Comments | Updated (UTC) |
| --- | --- | --- | --- | --- |
| docs-v2 | Ready | Preview | Comment | Dec 1, 2025 6:19pm |
| framework-docs | Ready | Preview | Comment | Dec 1, 2025 6:19pm |

@oatsandsugar (Contributor) left a comment:

In the summary, `moose prod ...`: it isn't clear what `prod` is. Does this affect Boreal?

The decision matrix should likely be a diagram rather than a table; it's not super clear right now.

(screenshot)

^^ We should make the above friendlier to the user. The bullets aren't really a list here; this should likely be prose, or at least the first bullets should be differentiated from the last.

See Apply planned migrations (service) for boot order, failure handling, and log expectations.

What are "log expectations"? How does this change boot order?

FULLY_MANAGED — Moose may perform any required operation, including destructive changes. Default for dev environments.

Is this true? I was blocked from doing a destructive schema change in moose dev https://fiveonefour-workspace.slack.com/archives/C08BYKG5P7C/p1763499924516499

Seems like the whole text here needs review. I'm happy to approve to unblock other PRs, but we'll need to do some wordsmithing to clean up the form, structure and language of these docs.

@oatsandsugar (Contributor) commented:

(Screenshot: 2025-11-19 at 12:47:36 PM) Why are we mixing docs and guide?

phiSgr and others added 19 commits November 29, 2025 11:51
The old code fails when `target_table.order_by` equals the `target_table` primary key, which equals the actual table's primary key, while the actual table's `order_by` is `[]`.

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Adds `order_by_with_fallback` and uses it to compare/normalize ORDER BY (fallback to primary key for MergeTree), fixing false diffs; updates diff logic and tests.
>
> - **Core Infrastructure (`apps/framework-cli/src/framework/core/infrastructure/table.rs`)**:
>   - Add `Table::order_by_with_fallback()` to derive ORDER BY from primary keys when empty for MergeTree engines.
>   - Update `Table::order_by_equals()` to compare using fallback.
> - **Infra Map (`apps/framework-cli/src/framework/core/infrastructure_map.rs`)**:
>   - Replace ad-hoc ORDER BY diff logic with `table.order_by_equals(target_table)`.
>   - `InfrastructureMap::normalize()` now sets `table.order_by` via `order_by_with_fallback()` when empty.
> - **Tests**:
>   - Add test for ORDER BY equality with implicit primary key and for non-MergeTree (S3) behavior.
>   - Adjust existing diff paths to rely on new equality/normalization.
>
> <sup>Written by [Cursor Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit eaa16da.</sup>
<!-- /CURSOR_SUMMARY -->
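A hedged Python rendering of the fallback described above (the real implementation is Rust, `Table::order_by_with_fallback()`; the function names and the engine-string check here are illustrative):

```python
def order_by_with_fallback(order_by, primary_key, engine):
    # On MergeTree-family tables, ClickHouse implicitly orders by the
    # primary key when ORDER BY is empty, so fall back to it for diffs.
    if not order_by and engine.endswith("MergeTree"):
        return list(primary_key)
    return list(order_by)


def order_by_equals(a_order, a_pk, b_order, b_pk, engine="MergeTree"):
    # Compare the effective (fallback-applied) ORDER BY on both sides,
    # avoiding false diffs when one side spells the ordering implicitly.
    return (order_by_with_fallback(a_order, a_pk, engine)
            == order_by_with_fallback(b_order, b_pk, engine))
```

Non-MergeTree engines (e.g. S3) get no fallback, matching the test case the summary mentions.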
<!-- CURSOR_SUMMARY -->
> [!NOTE]
> Introduce schema-driven datetime parsing to avoid JS Date truncation, add string-based DateTime types, update runners for precision and at-least-once processing, and add tests/docs.
>
> - **TypeScript runtime/lib**:
>   - Add `DateTimeString` and `DateTime64String<P>` types and export them.
>   - Introduce schema-driven JSON parsing (`utilities/json.ts`): build/apply field mutations to parse only true DateTime fields; annotate string-based date fields with `stringDate`.
>   - Update `typeConvert` to mark `date-time` string fields with `stringDate` and handle precisions.
>   - Update streaming `runner` to precompute field mutations from source stream columns and apply them per message; stop using global date reviver.
>   - Enhance `dmv2/internal.getStreamingFunctions` to return source `Column[]` alongside handlers.
> - **Python runtime**:
>   - Kafka consumer sets `enable_auto_commit=False`, flushes producer and commits after processing; add per-message partition/offset logging.
> - **Templates/Tests**:
>   - Add DateTime precision models/transforms for TS and PY; new e2e tests verify microsecond/nanosecond handling and string preservation.
>   - Minor template updates (geometry/array/DLQ unaffected).
> - **Docs**:
>   - Expand TS reference with Date/DateTime guidance, examples, and comparison table for `DateTime*` types.
>
> <sup>Written by [Cursor Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit 8e8504e.</sup>
<!-- /CURSOR_SUMMARY -->
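A rough sketch of the schema-driven mutation idea (the actual implementation lives in the TS `utilities/json.ts`; the column shape and parser below are assumptions for illustration). Only columns whose schema type is a real DateTime get a parser, so string-typed date fields keep precision beyond what a parsed date object would preserve:

```python
from datetime import datetime


def build_field_mutations(columns):
    # Columns are illustrative {"name", "type"} dicts, not the real
    # Column model. Only true DateTime columns get a parsing mutation.
    mutations = {}
    for col in columns:
        if col["type"] == "DateTime":
            mutations[col["name"]] = lambda v: datetime.fromisoformat(v)
    return mutations


def apply_mutations(record, mutations):
    # Applied per message in the runner; untouched fields pass through.
    for field, parse in mutations.items():
        if field in record and isinstance(record[field], str):
            record[field] = parse(record[field])
    return record
```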
…gs (#3024)

Much of these changes are formatting. If preferred, I can undo the auto-applied formatting changes.

<!-- CURSOR_SUMMARY -->
> [!NOTE]
> Introduces MooseModel for class-level column access with IDE autocomplete and adds Column formatting for safe SQL interpolation, plus tests and an experimental template demonstrating usage.
>
> - **Library (dmv2)**:
>   - **MooseModel**: New BaseModel subclass with metaclass adding class-level `Column` descriptors and `.cols` namespace for LSP autocomplete (`moose_lib/dmv2/moose_model.py`); exported via `dmv2.__init__`.
>   - **OlapTable**: Still exposes `.cols`; works with both `BaseModel` and `MooseModel` and documents access patterns.
> - **Data Modeling**:
>   - `Column` gains `__str__`/`__format__` to output quoted identifiers for f-strings (e.g., `{col:col}`), enabling safe SQL interpolation (`data_models.py`).
> - **Docs**:
>   - README adds "Column Autocomplete with MooseModel" usage snippet.
> - **Tests**:
>   - Add suites covering `MooseModel` behavior, backward compatibility with `BaseModel`, OLAP integration, and `Column` formatting.
> - **Templates**:
>   - New `python-experimental` template demonstrating MooseModel autocomplete in APIs, ingest models, views, and workflows, with setup files and editor settings.
>
> <sup>Written by [Cursor Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit 33a9063.</sup>
<!-- /CURSOR_SUMMARY -->
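The `{col:col}` formatting described above can be illustrated with a stripped-down `Column` (the real moose_lib class carries more metadata; backtick quoting is assumed here as the ClickHouse identifier style):

```python
class Column:
    """Stripped-down sketch of Column's __str__/__format__ support:
    f-strings interpolate a quoted identifier rather than raw text."""

    def __init__(self, name):
        self.name = name

    def __str__(self):
        return f"`{self.name}`"

    def __format__(self, spec):
        # "col" format spec (and the default) emit a quoted identifier.
        if spec == "col":
            return f"`{self.name}`"
        return str(self)


user_id = Column("user_id")
query = f"SELECT {user_id:col} FROM events ORDER BY {user_id}"
```

Both the `:col` spec and plain interpolation yield the backtick-quoted name, so SQL built this way can't silently embed an unquoted identifier.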
Auto-generated release notes for November 22, 2025.

This PR adds:
- New release notes file: `2025-11-22.mdx`
- Updated `_meta.tsx` with new entry
- Updated `index.mdx` with link to new release notes

The release notes were automatically generated from commits across the
moosestack, registry, and commercial repositories.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Adds November 22, 2025 release notes and updates release notes index and metadata.
>
> - **Docs**:
>   - **New page**: `apps/framework-docs/src/pages/release-notes/2025-11-22.mdx` with highlights:
>     - `moose query` CLI for SQL execution/formatting/code-gen
>     - ClickHouse cluster support with `ON CLUSTER` migrations
>     - Boreal database performance metrics visualization
>     - MooseStack and Boreal bug fixes/improvements
>   - **Navigation/Meta**:
>     - Add `2025-11-22` entry to `apps/framework-docs/src/pages/release-notes/_meta.tsx`
>     - Add link to `2025-11-22` in `apps/framework-docs/src/pages/release-notes/index.mdx`
>
> <sup>Written by [Cursor Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit ec5acb5.</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: Release Notes Bot <noreply@anthropic.com>
Co-authored-by: Dave Seleno <958603+onelesd@users.noreply.github.com>
Co-authored-by: Johanan Ottensooser <j@ottensooser.com>
<!-- CURSOR_SUMMARY -->
> [!NOTE]
> Add end-to-end IcebergS3 engine support (Rust CLI, TS/Py SDKs, generators, and docs) with runtime cred resolution, parsing/serialization, DDL, and tests.
>
> - **OLAP Engine (Rust/CLI)**:
>   - Add `ClickhouseEngine::IcebergS3` with parsing (`Iceberg(...)`), display/proto serialization (mask creds, omit in proto), DDL generation, and non-alterable params hashing.
>   - Resolve AWS creds at runtime for `IcebergS3` in `InfrastructureMap::resolve_s3_credentials_from_env`, recalculating `engine_params_hash`.
>   - Extend partial infra map to accept `IcebergS3` config.
> - **Code Generation**:
>   - TS/Py generators emit IcebergS3 engine blocks in generated configs.
> - **SDKs/Libraries**:
>   - Python: introduce `IcebergS3Engine`; update validators to forbid ORDER BY/PARTITION BY/SAMPLE BY; wire into internal JSON export.
>   - TypeScript: add `ClickHouseEngines.IcebergS3` and `IcebergS3Config`; include in infra serialization.
> - **Docs**:
>   - Add IcebergS3 usage examples (TS/Py) and notes (read-only, supported formats) in modeling tables docs.
> - **Tests**:
>   - Add comprehensive Rust, Python, and TypeScript tests for IcebergS3 parsing/serialization, hashing, validation, and config export.
>
> <sup>Written by [Cursor Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit a21847a.</sup>
<!-- /CURSOR_SUMMARY -->
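A minimal sketch of the validation rule mentioned for the Python SDK (read-only IcebergS3 forbids ORDER BY / PARTITION BY / SAMPLE BY); the dict-based config and function name here are hypothetical, not the actual `IcebergS3Engine` validator:

```python
def validate_iceberg_s3_config(config):
    # IcebergS3 tables are read-only external tables, so clauses that
    # shape on-disk layout make no sense and should be rejected early.
    forbidden = ("order_by", "partition_by", "sample_by")
    for field in forbidden:
        if config.get(field):
            raise ValueError(f"IcebergS3 tables do not support {field}")
```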
<!-- CURSOR_SUMMARY -->
> [!NOTE]
> Packages templates into the Nix output and relocates the moose-cli binary with a wrapper so the CLI can resolve templates at the expected path.
>
> - **Nix flake (`flake.nix`)**:
>   - **Template packaging**:
>     - Add `packages.template-packages` derivation to tar each template dir and generate `manifest.toml`.
>   - **moose-cli output layout**:
>     - Move real binary to `libexec/moose/moose-cli` and add a `bin/moose-cli` wrapper.
>     - Copy packaged templates into `$out/template-packages/` for runtime discovery.
>     - Retain build/linker tweaks (rdkafka dynamic-linking) and environment setup.
>
> <sup>Written by [Cursor Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit 7866acb.</sup>
<!-- /CURSOR_SUMMARY -->
<!-- CURSOR_SUMMARY -->
> [!NOTE]
> Add end-to-end support for introspecting, diffing, reconciling, and executing changes for ClickHouse views/materialized views (SQL resources), including SQL normalization and dependency-aware DDL ordering.
>
> - **OLAP/ClickHouse**:
>   - Implement `list_sql_resources` to introspect views/MVs from `system.tables`; reconstruct lineage and setup/teardown.
>   - Extend SQL parser with normalization, source table extraction, and helpers; enable `sqlparser` visitor feature.
>   - Add TTL/SETTINGS parsing improvements and tests.
> - **Core Planning & Reality Check**:
>   - Extend `InfraRealityChecker` with SQL resource discrepancies (`unmapped/missing/mismatched_sql_resources`).
>   - Update `reconcile_with_reality` to handle SQL resources and accept `target_sql_resource_ids`.
>   - Diff logic (`InfrastructureMap::diff_sql_resources`) and DDL ordering updated; include MV population and dependency-aware teardown/setup.
> - **APIs/CLI**:
>   - Admin endpoints and serverless fetch paths pass table and SQL resource targets; reconciled inframap now includes SQL resources.
>   - Remote plan/migration routines updated to propagate SQL resource IDs.
> - **Models/Proto**:
>   - Add `database` to `SqlResource`; implement ID, equality (SQL normalization), and (de)serialization.
> - **Misc**:
>   - Python streaming runner: set Kafka consumer `auto_offset_reset='earliest'`.
>   - Broad unit test additions across modules.
>
> <sup>Written by [Cursor Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit 9b80f40.</sup>
<!-- /CURSOR_SUMMARY -->
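The SQL-normalization-based equality used for SQL resources might look like this whitespace-level sketch (the CLI uses a real SQL parser for normalization; this deliberately simplistic version only collapses formatting differences, not semantic ones):

```python
import re


def normalize_sql(sql):
    # Collapse runs of whitespace and drop a trailing semicolon so that
    # two formattings of the same statement compare equal.
    return re.sub(r"\s+", " ", sql.strip()).rstrip(";").strip()


def sql_resources_equal(a, b):
    # Equality on normalized text, as used when diffing views/MVs
    # against what the database reports.
    return normalize_sql(a) == normalize_sql(b)
```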

---------

Co-authored-by: George Leung <phisgr@gmail.com>
Wrote this today https://github.com/LucioFranco/safe-chain-nix
<!-- CURSOR_SUMMARY -->
> [!NOTE]
> <sup>[Cursor Bugbot](https://cursor.com/dashboard?tab=bugbot) is generating a summary for commit 5ecd963. Configure [here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->
…ient (#3025)

[clickhouse_rs](https://docs.rs/clickhouse-rs/latest/clickhouse_rs/) doesn't support selecting `LowCardinality(String)`, but the HTTP client does.

This refactors `moose peek` and `moose query` to use the official [clickhouse](https://docs.rs/clickhouse/latest/clickhouse/) HTTP client, which is better supported.


<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Switches moose peek/query to the HTTP-based ClickHouse client with JSON output and removes the deprecated clickhouse-rs client and related code.
>
> - **CLI (ClickHouse query path)**:
>   - Replace native `clickhouse-rs` usage with HTTP `clickhouse` client in `cli/routines/{peek,query}.rs`.
>   - Add `infrastructure/olap/clickhouse_http_client.rs` with JSONEachRow HTTP querying via `reqwest`.
>   - Remove legacy `clickhouse_alt_client.rs`; update call sites and error handling.
> - **OLAP ClickHouse module**:
>   - Clean up `mod.rs` by dropping `clickhouse_rs`-specific code (e.g., `check_table_size`).
> - **Core/Planning**:
>   - Remove `PlanningError::Clickhouse` variant tied to `clickhouse_rs`.
> - **Dependencies**:
>   - Remove `clickhouse-rs` from `apps/framework-cli/Cargo.toml` (keep `clickhouse` crate); lockfile updates accordingly.
> - **Docs**:
>   - Add Rust CLI debug invocation instructions in `AGENTS.md`.
>
> <sup>Written by [Cursor Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit dddd524.</sup>
<!-- /CURSOR_SUMMARY -->
<!-- CURSOR_SUMMARY -->
> [!NOTE]
> Adds regex-based fallback to extract source tables (handling ClickHouse array literals), refactors MV/view reconstruction into shared logic, and strips backticks from MV TO targets with comprehensive tests.
>
> - **Parser (sql_parser.rs)**:
>   - Add `extract_source_tables_from_query_regex` regex fallback for `FROM`/`JOIN` to handle ClickHouse-specific array literal syntax.
>   - Expose and use `LazyLock` regex `FROM_JOIN_TABLE_PATTERN`.
>   - Keep existing AST-based `extract_source_tables_from_query` and tests.
> - **View/MV Reconstruction (mod.rs)**:
>   - Refactor into `reconstruct_sql_resource_common` shared logic for views and materialized views.
>   - Integrate parser fallback: try AST parser, then regex fallback with `default_database`.
>   - Strip backticks from MV `TO` target; preserve/qualify target IDs correctly.
>   - Pass actual `database` (remove unused `_database`).
> - **Tests**:
>   - Add tests covering regex fallback with array literals, MV target backtick stripping, standard SQL paths, and default DB application.
>
> <sup>Written by [Cursor Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit b6be521.</sup>
<!-- /CURSOR_SUMMARY -->
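A rough Python analogue of the regex fallback (the real `FROM_JOIN_TABLE_PATTERN` is a Rust `LazyLock` regex and handles more cases; the pattern and helper below are an approximation). Because the match requires a word boundary before `FROM`/`JOIN`, function names like `arrayJoin` don't trigger it, and array literals in the SELECT list don't confuse it the way they can confuse an AST parser:

```python
import re

# Approximation of the FROM/JOIN capture: an optionally backticked,
# optionally database-qualified identifier after the keyword.
FROM_JOIN_TABLE_PATTERN = re.compile(
    r"\b(?:FROM|JOIN)\s+(`?[\w.]+`?)", re.IGNORECASE
)


def extract_source_tables(query, default_database="default"):
    tables = []
    for match in FROM_JOIN_TABLE_PATTERN.finditer(query):
        name = match.group(1).strip("`")
        if "." not in name:
            # Qualify bare names with the default database, mirroring
            # the default_database behavior described above.
            name = f"{default_database}.{name}"
        if name not in tables:
            tables.append(name)
    return tables
```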
<!-- CURSOR_SUMMARY -->
> [!NOTE]
> Switches CLI flags to --clickhouse-url, adds robust ClickHouse URL parsing/conversion, refactors codegen/packaging to use project.source_dir, and updates docs accordingly.
>
> - **CLI**:
>   - Rename flags to `--clickhouse-url` for `seed clickhouse` and `db pull` (keep alias `--connection-string`).
>   - Update prompts, logs, errors, and success messages to reference ClickHouse URL.
>   - Add helpers to resolve/override serverless URLs from flags/env.
> - **ClickHouse URL Parsing**:
>   - Introduce `parse_clickhouse_connection_string_with_metadata` with percent-decoding, native-to-HTTP(S) conversion notice, display-safe URL, and database detection (queries `database()` when not explicit).
> - **Code Generation / DB Pull**:
>   - Use new parsing + `create_client` in `create_client_and_db`.
>   - External models writing generalized to accept `source_dir`; path handling switched from `APP_DIR` to `project.source_dir`.
> - **Build & Docker Packager**:
>   - Package copy lists replace `APP_DIR` with `project.source_dir`.
> - **Docs**:
>   - Update examples and references to `--clickhouse-url` across CLI, local dev, getting started, and db-pull guides.
>
> <sup>Written by [Cursor Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit 65a29a3.</sup>
<!-- /CURSOR_SUMMARY -->
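Native-to-HTTP(S) conversion along the lines described can be sketched as follows. The port mapping (native 9000 to HTTP 8123, native-TLS 9440 to HTTPS 8443) follows common ClickHouse defaults; the helper name and return shape are hypothetical, unlike the CLI's actual `parse_clickhouse_connection_string_with_metadata`:

```python
from urllib.parse import urlparse, unquote


def to_http_url(connection_string):
    u = urlparse(connection_string)
    # Percent-decode credentials (e.g. p%40ss -> p@ss).
    user = unquote(u.username or "default")
    password = unquote(u.password or "")
    if u.scheme in ("http", "https"):
        scheme = u.scheme
        port = u.port or (8123 if scheme == "http" else 8443)
    elif (u.port or 9000) == 9440:
        # Native TLS port maps to the HTTPS interface.
        scheme, port = "https", 8443
    else:
        # Plain native port maps to the HTTP interface.
        scheme, port = "http", 8123
    database = (u.path or "/").lstrip("/") or None
    return f"{scheme}://{u.hostname}:{port}", user, password, database
```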
Somehow `Type instantiation is excessively deep and possibly infinite` happened today.

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Migrates the TypeScript tests template to zod v4 with a stricter output schema and disables the TS backward-compatibility E2E job in the workflow/status checks.
>
> - **Templates (TypeScript tests)**:
>   - Switch `zod` import to `zod/v4` in `templates/typescript-tests/src/apis/barExpressMcp.ts`.
>   - Tighten `outputSchema.rows` to `z.array(z.record(z.string(), z.any()))`.
> - **CI**:
>   - Comment out `test-e2e-backward-compatibility-typescript` from `changes` job `needs` and remove it from success/failure checks in `.github/workflows/test-framework-cli.yaml`.
>
> <sup>Written by [Cursor Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit 8dc3d7e.</sup>
<!-- /CURSOR_SUMMARY -->
<!-- CURSOR_SUMMARY -->
> [!NOTE]
> Replaces fern/log with tracing + tracing-subscriber (with EnvFilter/JSON/file rotation) and OTEL tracing appender, updating all modules to use tracing macros.
>
> - **Logging overhaul (CLI)**:
>   - Replace `fern`/`log` with `tracing` + `tracing-subscriber` (EnvFilter, JSON/text formats, optional modern/legacy formatter).
>   - Implement OTEL export via `opentelemetry-appender-tracing`; add batch log processor and resource labels.
>   - Add date-based file writer compatible with previous naming; clean up old logs; session/machine IDs in events.
>   - Introduce legacy format layer to match prior output; opt-in modern formatting via env var.
> - **Codewide changes**:
>   - Swap `log::{...}` macros for `tracing::{...}` across CLI, framework, infra, and MCP modules; adjust some log levels (e.g., HTTP metrics to trace).
> - **Dependencies**:
>   - Remove `fern`, `log`, and `opentelemetry-appender-log`.
>   - Add `tracing-subscriber`, `tracing-serde`, and `opentelemetry-appender-tracing`; update lockfile accordingly.
>
> <sup>Written by [Cursor Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit 936b9d4.</sup>
<!-- /CURSOR_SUMMARY -->
<!-- CURSOR_SUMMARY -->
> [!NOTE]
> Bumps `safe-chain-nix` in `flake.lock` to `c931beaa` (updated rev, narHash, and timestamp).
>
> <sup>Written by [Cursor Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit 0171a30.</sup>
<!-- /CURSOR_SUMMARY -->
<!-- CURSOR_SUMMARY -->
> [!NOTE]
> Disables OTEL integration for the legacy logging format and marks session/machine IDs unused.
>
> - **Logger (`apps/framework-cli/src/cli/logger.rs`)**:
>   - **Legacy format**:
>     - Remove OTEL layer wiring even when `settings.export_to` is set (no `otel_layer` used).
>     - Mark `session_id` and `machine_id` parameters as unused (`_session_id`, `_machine_id`).
>
> <sup>Written by [Cursor Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit c39bd4d.</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: George Anderson <george.anderson@fiveonefour.com>
<!-- CURSOR_SUMMARY -->
> [!NOTE]
> Add elapsed-time logging by timing requests and passing start time to `httpLogger` across all response paths.
>
> - **Runtime/Logging**:
>   - Update `httpLogger` to accept `startMs` and log request latency in ms.
>   - Capture `start = Date.now()` in `apiHandler` and `createMainRouter` request handlers.
>   - Invoke `httpLogger(req, res, start)` on all exits (auth failures, success responses, errors, 404) in `packages/ts-moose-lib/src/consumption-apis/runner.ts`.
>
> <sup>Written by [Cursor Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit d8061a5.</sup>
<!-- /CURSOR_SUMMARY -->
<!-- CURSOR_SUMMARY -->
> [!NOTE]
> Adds first-class primary key expression support across OLAP pipelines, generating/reading PRIMARY KEY in DDL, normalizing for diffs, and exposing config in TS/Python SDKs with tests and docs.
>
> - **OLAP/ClickHouse core**:
>   - Add `primary_key_expression` to `Table`/ClickHouse models and protobuf; include in (de)serialization and introspection.
>   - Generate PRIMARY KEY in `create_table_query` (uses expression or column flags) and expose in list_tables via SQL parsing (`extract_primary_key_from_create_table`).
>   - Normalize PKs (strip spaces/backticks/outer parens) and compare in diffs; PK changes trigger drop+create.
> - **Code generation**:
>   - TS/Python generators output `primaryKeyExpression`/`primary_key_expression` in table configs and switch key-wrapping logic to use it.
> - **SDK/Internal config**:
>   - TS/Python SDKs add `primaryKeyExpression`/`primary_key_expression` to `OlapTable` config and internal representations.
> - **Tests**:
>   - Extensive unit tests for PK parsing/normalization/diff behavior and DDL generation; e2e template tests validating PRIMARY KEY and ORDER BY in DDL.
> - **Docs**:
>   - New/updated docs detailing primary key expressions, constraints, and examples in TS/Python, including ORDER BY prefix requirement.
>
> <sup>Written by [Cursor Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit 3bdd55e.</sup>
<!-- /CURSOR_SUMMARY -->
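The PK normalization steps mentioned above (strip spaces and backticks, drop outer parentheses) can be sketched as follows; this is a hypothetical helper illustrating the idea, not the Rust implementation:

```python
def normalize_primary_key(expr):
    # Drop whitespace and backticks so formatting differences don't
    # produce spurious diffs.
    s = "".join(ch for ch in expr if not ch.isspace()).replace("`", "")
    # Strip one pair of outer parentheses only if they wrap the whole
    # expression (not e.g. "(a),(b)" where they close early).
    if s.startswith("(") and s.endswith(")"):
        depth, wraps = 0, True
        for i, ch in enumerate(s):
            if ch == "(":
                depth += 1
            elif ch == ")":
                depth -= 1
                if depth == 0 and i != len(s) - 1:
                    wraps = False
                    break
        if wraps:
            s = s[1:-1]
    return s
```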
Refactored migration documentation including:
- Reorganized migration lifecycle documentation
- Added new pages for auto-inferred migrations and lifecycle modes
- Updated planned migrations and CLI documentation
- Added plan format reference
- Fixed broken links throughout migration docs
- Updated navigation structure

add copy disable to plan reference

draft
```rust
.with(env_filter)
.with(legacy_layer)
.init();
}
```

Bug: OTEL export silently broken in default logging format

The `setup_legacy_format` function matches the `export_to` endpoint but never creates or uses an OTEL layer. The `_endpoint`, `_session_id`, and `_machine_id` parameters are marked as unused with underscore prefixes, and the code paths for `if let Some(_endpoint)` and `else` are functionally identical. This means that when `export_to` is configured for OTEL log export, logs will silently not be exported when using the legacy format (which is the default, since `use_tracing_format` defaults to `false`). Compare with `setup_modern_format`, which correctly calls `create_otel_layer(endpoint, session_id, machine_id)` and adds it to the subscriber.


```rust
})?;

// Stream results to stdout
let success_count = rows.len().min(limit as usize);
```

Bug: CLI limit parameter not passed to ClickHouse query

The `--limit` parameter is documented as "Maximum number of rows to return (via ClickHouse settings)", but it is never actually passed to ClickHouse. The `query_as_json_stream` function is called without any limit, fetching all rows into memory first; the limit is then applied client-side when printing. For queries against infinite tables like `system.numbers` (used in the "should respect limit parameter" E2E test), this would cause the command to hang indefinitely, trying to fetch infinite rows before applying the limit.
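One possible server-side fix, sketched under the assumption that the user query can be wrapped in an outer SELECT (the actual fix might instead pass a `max_result_rows`-style ClickHouse setting):

```python
def with_limit(query, limit):
    # Wrap the user query so ClickHouse enforces the cap before
    # streaming rows back, instead of fetching everything and
    # truncating client-side. Hypothetical helper for illustration.
    inner = query.rstrip().rstrip(";")
    return f"SELECT * FROM ({inner}) LIMIT {int(limit)}"
```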



10 participants